Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...


Daniel Nashed

 Domino  OTS 

Domino One Touch Setup (OTS) domain join token

Daniel Nashed – 11 May 2025 22:56:47

Many applications support a "join token". For example, Kubernetes creates a join token which contains the connection information for the existing cluster and also the authentication needed to join it.
With Domino OTS we can actually implement something very similar.


If you have an existing server, you can create an OTS JSON file pointing to the existing domain and server.
The only part that is missing is some kind of authentication.

We actually have this type of authentication: the server.id. But it would be a separate file.


So here is the idea:


  • We can encode the server.id in base64 and include it into the JSON.
  • The container entrypoint.sh script decodes it, stores it on disk and patches the JSON file.

The format looks like this and matches other OTS functionality like the password prompts.

Example:


"IDFilePath":"@Base64:AQABAC4BAAAAAAAA4..."


With this type of join token we have a single file to pass to your additional server.
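The encode and decode steps can be sketched in shell. This is a minimal illustrative sketch, not the actual entrypoint.sh logic: a dummy file stands in for a real server.id, and GNU coreutils base64 is assumed.

```shell
# Minimal sketch of the join-token round trip (not the actual entrypoint.sh).
# A dummy file stands in for a real server.id; GNU coreutils base64 is assumed.
head -c 256 /dev/urandom > server.id

# Encode: base64 without line wrapping, embedded with the @Base64: prefix
B64=$(base64 -w0 server.id)
printf '"IDFilePath":"@Base64:%s"\n' "$B64" > token.txt

# Decode, as a container entrypoint script might do it:
VALUE=$(sed -n 's/.*@Base64:\([^"]*\)".*/\1/p' token.txt)
printf '%s' "$VALUE" | base64 -d > server_decoded.id

cmp -s server.id server_decoded.id && echo "round trip OK"
```

In the real setup the decoded file would be written into the Domino data directory and the JSON patched to point at it.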


How to create an OTS JSON from a server.id


If you have a server.id, you can generate all other information from the server.id and your NAB:
  • There is C-API code to read the server name from the server.id
  • With the server.id you can look up all other information from your current mail server, or let the script prompt for the right server to look up the information.
  • This allows you to generate a full OTS JSON file

I wrote a LotusScript class to do exactly that. I have added it to the JSON generation database, which is available on the OpenNTF Net server via NRPC:


home.openntf.net/openntf-net!!nashcom/domino-ots.nsf


Maybe it would be a good idea to create a separate server registration database working with a cert.id or a Domino CA to provide a full end-to-end experience.

But the most important step is to enable the container image to consume the join-token style @Base64: syntax.
I have just submitted the additional logic to the develop branch of the container project.


 Domino 

Configure an additional Notes port on a server

Daniel Nashed – 10 May 2025 15:51:34

The previous blog post dealt with the background of having a second Notes TCP/IP port.
This post focuses on setting up a new Notes port end to end, using the DNUG Lab environment as an example.


The server I am configuring has two separate IP addresses on two different network cards.
But the same procedure would also work with IP addresses in the same network.


Some of the settings have to be specified via notes.ini directly.
Other settings can be configured in the UI, but result in notes.ini settings.


In my example I am using a public (159.69.82.118) and a private IP address (10.0.0.3).


The ports notes.ini setting can be managed with the port configuration in the admin client, similar to how you configure your local Notes client port.

The dialog is a bit hidden in the admin client.


Open the admin client and switch to the "Configuration" tab


In the menu select "Configuration -> Server -> Setup ports ..."


Image:Configure an additional Notes port on a server

After configuring the port, the notes.ini "ports" setting will contain two ports:


ports=TCPIP,TCPIP-LOCAL


Each port gets basic settings set automatically by the dialog.

The first line contains the settings for the port. The last part consists of option bits holding the port compression and encryption settings.

The second line contains the connection timeout, also specified in the UI.



TCPIP=TCP, 0, 15, 0,,45088

TCPIP_TcpConnectTimeout=0,30


TCPIP-LOCAL=TCP,0,15,0,,45056

TCPIP-LOCAL_TcpConnectTimeout=0,30



Complete Notes port settings for hosting multiple ports


Because both ports by default use port 1352, you now have to bind each port to a specific IP address.
In this case we are assigning the public IP to the standard "TCPIP" port and the local IP to Port "TCPIP-LOCAL".


The prefix for all those parameters is always the port name you selected.

It's a bit confusing because the standard port itself is named "TCPIP" as well.


TCPIP_TcpIpAddress=0,159.69.82.118

TCPIP-LOCAL_TcpIpAddress=0,10.0.0.3



Specify the port used for internet protocols


Once you have another port, you want to make sure Internet traffic is sent through the external port by specifying the following notes.ini parameters.


SMTPNotesPort=TCPIP

POP3NotesPort=TCPIP

LDAPNotesPort=TCPIP

IMAPNotesPort=TCPIP



Set the cluster port


In a clustered environment you should also set the default port for cluster traffic to the new local port.


Server_Cluster_Default_Port=TCPIP-LOCAL



Check and complete port settings in server document


The server document contains a list of ports.

The driver will be filled in by AdminP. But the other settings need to be completed manually.


Each port should map to a Notes named network (NNN), which should be the same for the same type of port on all servers located close to each other, for example in the same LAN or cluster.

Servers in the same NNN see each other and can route mail without connection documents.


But usually I would recommend creating connection documents for each server to be in full control.



Image:Configure an additional Notes port on a server



Start/restart ports or better restart server


Restarting a port to bind it only to one IP is tricky.

You can stop and restart ports. But usually it is easier to restart your server.



Check port availability


First run "show port TCPIP-LOCAL".

If the port is not yet started try to start it manually:


start port TCPIP-LOCAL


Once the port is enabled on multiple servers, you can trace the connection.

The extended syntax uses the port delimiter "!!!".

The full path to a database is
port!!!server!!db.nsf

Ports are usually omitted, and the server chooses the right port or only has one port.
But this syntax also works when tracing connections:


trace TCPIP-LOCAL!!!linus.lab.dnug.eu/dnug-lab




Connection documents


Connection documents contain the port to use for the connection. Usually you see "TCPIP" in those connection documents when only one port is available.

Now you can switch the connection documents to the new local port to use the local connection between servers.



Image:Configure an additional Notes port on a server

TCP/IP Settings for IPv6


Running IPv6 introduces additional challenges and would be a topic for another blog post.

But here is the basic information for IPv6, which can also be assigned to separate ports.

With one Notes port you would only need to enable IPv6. But with multiple ports you will also need to bind the IPs to separate ports.


https://help.hcl-software.com/domino/14.0.0/admin/plan_examplesofusingnotesinivariableswithipv6_c.html


Benefits of running Domino with multiple TCP/IP ports

Daniel Nashed – 10 May 2025 12:37:49

Introduction


Support for multiple TCP/IP ports has been part of HCL Domino since the early days. Back then, it was essential to support multiple simultaneous modem connections. It also proved valuable for clustered servers using dedicated network cards.
While today’s networks offer 1 Gbit/s or even 10 Gbit/s speeds—making multiple ports less necessary from a raw bandwidth perspective—there are still compelling reasons to use multiple Notes ports in modern environments.



Historical Context and Evolution


In the days of 10 Mbit/s Ethernet, splitting user and server traffic across different ports and network cards made a lot of sense. This was sometimes even done with dedicated network cables between servers as a private LAN connection.

It helped optimize limited bandwidth and reduce contention. While raw network speeds have improved dramatically, the architectural benefits of multiple ports remain relevant in specific scenarios.



Performance Benefits


The main advantage of using multiple Notes ports is to separate user-to-server traffic from server-to-server traffic. This separation improves performance and scalability, especially under high load.

Each port has its own listener and thread pool, which allows more granular control and scalability for NRPC (Notes Remote Procedure Call) traffic.
You can assign specific ports to different types of connections—for example, routing all cluster replication traffic through a dedicated Notes port with a separate IP address and network card.

This strategy remains highly effective in optimizing performance in Domino environments with high cluster and server activity in general.


Introducing a separate Notes port on the same network card with a separate IP address is already beneficial, because the separate TCP/IP listener queue and the dedicated thread pool provide most of the benefit.
But depending on your hardware or network setup, you might already have separate network cards.



Cloud and Cost Considerations


In many cloud environments—particularly with service providers—data ingress and egress are billed separately. However, internal traffic (e.g., within a private 10.x.x.x network) is often free.
By setting up a dedicated Notes port for internal communication, you can route intra-server traffic over the private network. This approach helps reduce monthly costs while preserving performance.



Security and Performance Optimization


External-facing ports should always use encryption, and depending on your setup, enabling compression may also be beneficial.
However, for internal server-to-server connections — such as those between Citrix-hosted Notes clients and back-end servers — disabling compression and even encryption can significantly reduce CPU load and improve performance.


Of course, this optimization assumes you're operating in a trusted network environment.
Your security team must approve any unencrypted traffic. In some cases, traffic is already protected by VPN tunnels, in which case additional encryption at the Notes level may be redundant.

Having support for multiple Notes ports enables these optimizations without compromising external security policies.



Practical Example: DNUG LAB at Hetzner


In our DNUG LAB hosted at Hetzner, we implemented a dedicated internal network port for server-to-server communication using a private 10.x.x.x address.
This internal port is unencrypted and uncompressed, as it is isolated from the external network via firewall and network segmentation.

Even in a small lab environment, this setup has helped reduce costs and improve performance. All servers are configured with a second Notes port, and all connection documents point to the internal network.



Additional Security for Different Ports


You can define port-specific access controls, including group-based restrictions. While network segmentation is usually sufficient, the ability to explicitly restrict who can access each port adds another layer of security.

This is particularly useful in cloud deployments or large clustered environments, where server-to-server traffic can significantly exceed typical user traffic due to just-in-time streaming replication and inter-server communication.



Important Note: Directory Assistance Configuration


Be cautious with Directory Assistance (DA) configurations. If you specify a remote server for DA, it may use remote databases by default. This introduces additional load and creates potential failover issues.

To force DA to use a local replica, enter a single asterisk (*) in the server name field. This instructs Domino to always use the local copy, avoiding unnecessary inter-server traffic—even if both servers are in the same data center.



Conclusion


Domino has supported multiple network ports since its inception, and they still offer distinct advantages in specific scenarios.

For most standard servers, a single port is sufficient. But for large clusters, hosted environments, or cost-sensitive cloud deployments, using separate Notes ports can greatly enhance performance, optimize traffic routing, and reduce operational costs.


A follow-up post will walk through the steps to configure a separate Notes port. This article focused on the "why" — next, I will dive into the "how."



 OCR  Tika 

tesseract -- Teaching Tika to read image formats

Daniel Nashed – 10 May 2025 09:37:14

While looking into what I can do with Domino IQ, I did some experiments with LLMs in general.

Out of the box, Domino IQ today does not support images.
But sending images to an LLM might not even be the best option, depending on your use case.


If you are looking for real image processing a visual model might be a good choice.

But in many cases you are looking for "text processing" in images. For example when processing invoice scans or similar documents.


OCR has been around for a very long time, and we might have forgotten about it during the AI hype.

It turns out that OCR in combination with an LLM can be a way more efficient and effective way to get text out of images.


So the idea is to pre-process images before running an LLM query.


Domino uses Apache Tika in the background to extract data from many formats.

But out of the box Tika cannot process images.



Tesseract - an interesting project with a long history


It turns out that there is a free package which works stand-alone on the command line, as a C/C++ library to include in your applications, and also integrates with Tika.

In fact, it is well integrated into Tika even though it is a separate project.

One of the reasons it is separate is that it does not fit into the Java-based Apache project.

But Tika automatically detects it when installed, and it is included in the Ubuntu distribution, for example.


You can find details about the project here: https://tesseract-ocr.github.io
But let me show you how simple it is to use.


Once you have it installed on Linux, you can just run it from the command line.

The command line is pretty simple: you specify the input file and an output text file name.


Example:


tesseract invoice.png invoice
cat invoice.txt


Tika directly integrates with it and finds it once installed.


Notes/Domino leverages Apache Tika in the background, running it on localhost.
You should not try to use the Domino Tika instance, because it is controlled by the full-text index back-end of Domino and is started and stopped as needed.

But you can start your own Tika instance.


You can either download the latest version or use the one included in Domino.

In this example I am downloading it manually before running it.



Running Tika stand-alone


curl -L https://dlcdn.apache.org/tika/3.1.0/tika-server-standard-3.1.0.jar -o tika-server.jar
java -jar tika-server.jar > tika.log 2>&1 &


Tika provides a simple REST based API. Notes/Domino uses the exact same interface.


With this interface you can get text from a file you send in a binary POST.
By the way, there are also other endpoints which classify attachments in detail.


For a full reference of the Tika REST API check this link: https://cwiki.apache.org/confluence/display/TIKA/TikaServer

But in our case we just want to send a plain request to get the text from an image.

With Tesseract installed, Tika does support image formats.


This interface can be used from the command line or from your own applications, provided you find a way to send binary data.

The LotusScript HTTP request class currently does not support sending binary data.

And it would be much more efficient to run the extraction on the server side.


But this is a general free option you can leverage in your applications not just for scanning images.

You can use Tika for your needs. But you need your own instance running on a different port (because the embedded instance is currently only usable by Domino FT indexing).



curl -T invoice.png http://localhost:9998/rmeta/text | jq
curl -T invoice.png http://localhost:9998/rmeta/text | jq -r '.[0]."X-TIKA:content"'


Domino Tika supporting image indexing


But this does not only work on the command line. Once you have installed Tesseract, Notes/Domino can also index images in attachments.


You could install Tesseract OCR on Linux and have Domino Tika use image processing.


To get this working you also have to include those attachment types in FT indexing.
They are disabled by default because Tika cannot process them, so they are not sent to Tika for indexing.

But there is a way to specify your own list of attachment types.


In my testing, Tesseract made the CPU quite busy for a couple of test attachments.

Top showed that Tika invoked it multiple times in parallel during attachment re-indexing (updall -x db.nsf).



FT_USE_ATTACHMENT_WHITE_LIST=1

FT_INDEX_FILTER_ATTACHMENT_TYPES=*.jpg,*.png,*.pdf,*.pptx,*.ppt



Building a container image with Tesseract support


The Domino container project supports adding Linux packages. Sadly the package is not available for Redhat UBI.

But you can use Ubuntu as the base image of your container and just have the packages added at build time.



./build.sh menu -from=ubuntu -linuxpkg "tesseract-ocr tesseract-ocr-eng tesseract-ocr-deu"




Running your own Tika Server with Tesseract support


Here is a simple test using an Ubuntu docker container.

This could eventually be turned into its own container image.

You also need to download the Tika server separately. But in a Domino container you would already have Tika installed.


docker run --rm -it ubuntu bash

apt update

apt install -y openjdk-21-jdk curl jq tesseract-ocr tesseract-ocr-eng tesseract-ocr-deu



Alpine Linux would be the better choice for a container


Alpine also supports Tesseract, but it does not include Tika directly either.
Here is a simple command line to install it. Alpine is much lighter in terms of installed packages, as you will notice when you run those commands.



docker run --rm -it alpine sh

apk update
apk add openjdk21 curl jq tesseract-ocr tesseract-ocr-data-eng tesseract-ocr-data-deu




My Conclusion & your feedback


I would not add Tesseract to a Domino server for Tika and change Tika indexing globally.

This was just to show how far we could go. And maybe HCL wants to look into the Tesseract option in some way.

It could also be built into Notes/Domino itself to allow text extraction from images.


I would look into Tika as a separate service you use for your own applications and leave FT indexing alone for now.

Tika itself with or without this extension is another tool in your arsenal for building cool applications.


The tika-server.jar comes with every Notes client and Domino server.
You could run it for your own applications today under your control.
The only real challenge is sending binary POST requests to Tika.


Local Tesseract support on a server could invoke the binary like Tika does.
Or you could use their C library to add it to your own C-API based solutions.


I thought about building a DSAPI filter to provide Tika functionality.

And I would be interested to hear if this would make sense from your point of view.

I already have LibCurl code to talk to Tika from a performance troubleshooting project.

It can run on databases to extract attachment data and write the results into a document.


This blog post is meant to raise awareness for Tika and Tesseract.
It might be food for thought for your own integrations and requirements.


Did anyone work with Tesseract and/or Tika outside Domino before?

What is your feedback and what are your use cases?


I could build a simple Docker container for reuse, putting both components into one new Tika service.

But again, it is more a challenge of how to access it from Notes/Domino.



 Docker  LLM  NVIDIA 

Docker Desktop LLM support with NVIDIA

Daniel Nashed – 5 May 2025 10:48:17

After my first simple test, I updated Docker Desktop on my lab machine.
There is a GPU option in the experimental settings which shows up when you have a matching GPU.

Once it is enabled, the following command loads the model into the NVIDIA GPU:


docker model run ai/qwen3:0.6B-Q4_0


To check details I ran a Docker container with Ubuntu also using the GPU.


Besides nvidia-smi to check the card's driver version, I installed "nvtop" (which is part of Ubuntu) to see the GPU performance.


docker run --gpus all -it --rm ubuntu bash


Looking a bit deeper into the installed binaries, you can indeed see that Docker also leverages the llama.cpp project (https://github.com/ggml-org/llama.cpp).

Interestingly, the GPU option mentions that additional software will be downloaded. But so far I only found those files:



Directory of C:\Program Files\Docker\Docker\resources\model-runner\bin

05/05/2025  12:32    <DIR>          .
05/05/2025  12:32    <DIR>          ..
05/05/2025  12:31                71 com.docker.llama-server.digest
05/05/2025  12:31         1.838.320 com.docker.llama-server.exe
05/05/2025  12:31            44.784 com.docker.nv-gpu-info.exe
05/05/2025  12:31           481.520 ggml-base.dll
05/05/2025  12:31           492.272 ggml-cpu.dll
05/05/2025  12:31            65.776 ggml.dll
05/05/2025  12:31         1.212.144 llama.dll
              7 File(s)      4.134.887 bytes


Image:Docker Desktop LLM support with NVIDIA


Image:Docker Desktop LLM support with NVIDIA


 Docker  LLM 

Docker - a new player in the LLM business

Daniel Nashed – 5 May 2025 09:30:28

Docker has a new feature in beta. Running models on Docker.
There is not much information about the underlying technology used.

But during installation you can see that it installs a llama-server (which Ollama and also Domino IQ are using).


Here is a link to the official documentation: https://docs.docker.com/model-runner/
Docker provides a registry for models. For example: https://hub.docker.com/r/ai/qwen3

To pull a model you just use the new model command. The following is a good small model to test.


docker model pull ai/qwen3:0.6B-Q4_0



Once downloaded you can list models


docker model list

MODEL NAME          PARAMETERS  QUANTIZATION    ARCHITECTURE  MODEL ID      CREATED     SIZE

ai/qwen3            8.19 B      IQ2_XXS/Q4_K_M  qwen3         79fa56c07429  4 days ago  4.68 GiB

ai/qwen3:0.6B-Q4_0  751.63 M    Q4_0            qwen3         df9f2a333a63  4 days ago  441.67 MiB



There are multiple ways to access the AI components.


1. Command Line


From the command line you can just start a model, very similar to what Ollama does:


docker model run ai/qwen3:0.6B-Q4_0



2. Within containers


From within containers you can just use the API endpoints at:
http://model-runner.docker.internal/

For example the OpenAI end-point:


POST /engines/llama.cpp/v1/chat/completions
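As a sketch, a request against this endpoint could look like the following. The payload shape follows the OpenAI chat completions format; the model name is taken from the pull example above, and the curl call is commented out because it only works with the model runner enabled.

```shell
# Hypothetical request payload for the OpenAI-compatible chat endpoint.
# The model name matches the earlier "docker model pull" example.
cat > request.json <<'EOF'
{
  "model": "ai/qwen3:0.6B-Q4_0",
  "messages": [
    {"role": "user", "content": "Say hello in one short sentence."}
  ]
}
EOF

# From inside a container (requires the Docker model runner to be enabled):
# curl -s http://model-runner.docker.internal/engines/llama.cpp/v1/chat/completions \
#      -H "Content-Type: application/json" -d @request.json
echo "payload: $(wc -c < request.json) bytes"
```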



3. Docker Socket


curl --unix-socket $HOME/.docker/run/docker.sock \

   localhost/exp/vDD4.40/engines/llama.cpp/v1/chat/completions



4. Expose a TCP socket on Docker host loopback interface


curl http://localhost:12434/engines/llama.cpp/v1/chat/completions


First look results


This looks like a great new option to run LLM models.


For my first test it looked like it was not using my GPU.

But even on my very old ThinkPad (I will test with the new GPU machine), the performance with this small model was OK.


This is just the beginning and there is more to discover. I just took a quick peek into it.

The integration into the registry alone, and having everything from one vendor, is interesting.

In addition, it is part of the Docker stack, so companies would not need to use an open source project like Ollama directly.


This sounds like a smart Docker move to me.


Below are some screen shots from my test this morning.



Image:Docker - a new player in the LLM business


Image:Docker - a new player in the LLM business

Image:Docker - a new player in the LLM business


Handling Markdown in Notes/Domino

Daniel Nashed – 1 May 2025 19:13:54

Markdown is a simplified format which many modern applications support.
All GitHub projects use Markdown. LLMs return formatted text in Markdown.

There are multiple formats. The standard format is CommonMark, which supports extensions.
One of the most well known extension is Git Flavored Markdown (GFM).


Markdown -> CommonMark

https://commonmark.org/
https://github.com/commonmark/cmark


GitHub Flavored Markdown Spec

https://github.github.com/gfm/#tables-extension-


GitHub provides a way to convert Markdown via a REST API

curl -X POST -H "Content-Type: application/json" -d '{"text":"# Hello World", "mode":"gfm"}' https://api.github.com/markdown

Pandoc is an interesting tool

Pandoc is one of the best conversion tools:

pandoc -f html -t markdown hello.html


Flexmark

But the best way currently is Flexmark which is also used by the Domino REST API

https://github.com/vsch/flexmark-java

There is also a C implementation. But the Java lib can be integrated into applications.
I implemented a Java script library converting Markdown to HTML.



How to best store Markdown and HTML?

Multipart MIME is a good format to store multiple formats
  • text/plain
  • text/html
     
There is an RFC for text/markdown. But probably the better way today for best compatibility is to store Markdown in text/plain.
The plain text can be used later to edit the Markdown and convert it again.


For now this sounds like the best way to store Markdown and HTML.
This would be future proof and can be permanently stored natively in Notes documents.
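The storage layout described above can be sketched as a multipart/alternative MIME body. This is an illustrative sketch: the boundary string and content are made up, and a real implementation would generate the HTML part from the Markdown.

```shell
# Sketch of a multipart/alternative body holding Markdown (as text/plain)
# plus its HTML rendering. Boundary and content are illustrative.
BOUNDARY="=_md_0001"
cat > body.mime <<EOF
Content-Type: multipart/alternative; boundary="$BOUNDARY"

--$BOUNDARY
Content-Type: text/plain; charset="UTF-8"

# Hello World
--$BOUNDARY
Content-Type: text/html; charset="UTF-8"

<h1>Hello World</h1>
--$BOUNDARY--
EOF

grep -c '^--' body.mime   # counts the three boundary lines
```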


HTML does not have a CSS out of the box

Converting from Markdown to HTML is pretty straightforward.
The bigger challenge is formatting the HTML with a proper CSS.

The full GitHub CSS for Markdown is not compatible with Notes.
For now I am using a simple CSS which looks pretty OK.
But there is room for improvements and we are still working on it.

Once we find a better CSS, we can reconvert the documents at any time.
We still have the original Markdown in the plain text representation and can apply any CSS.

There is still work to do for the CSS part. But this is already a good first step.


Image:Handling Markdown in Notes/Domino

Ollama - latest small LLMs which work well on modern CPUs

Daniel Nashed – 1 May 2025 16:29:14

Domino IQ does now support external LLMs. One of the best ways to run LLMs is Ollama.


Running on an NVIDIA GPU is the best performing option.

Running on a Mac Apple Silicon is probably the second best option.


But if you choose a small model, the performance is quite OK on a modern CPU.

It should be a modern CPU like a 12th or 13th generation Intel.


The following recent models will work well.


For external LLMs you will need TLS, which can be implemented with a simple NGINX reverse proxy setup for example.


qwen3


ollama run qwen3:0.6b

ollama run qwen3:1.7b


The 0.6b model is really small but quite OK, and very fast. The next bigger one is also still OK from a performance point of view.

https://ollama.com/library/qwen3


granite3.3


ollama run granite3.3:2b


https://ollama.com/library/granite3


gemma3


ollama run gemma3:1b


https://ollama.com/library/gemma3

GL.iNet Slate 7 Wi-Fi 7 Travel Router - A class of its own

Daniel Nashed – 1 May 2025 23:40:33


At home my preferred router is still a Fritzbox.
But I was always looking for a good OpenWRT router.

OpenWRT is an interesting platform: an open source Linux platform which brings interesting options.

OK, running a Docker host on it isn't a good idea (yes, I tried that, but the box does not have sufficient RAM for it).

The USB connection isn't the fastest. But everything else on this router is really great.

My plan is to use this router at DNUG conference in June for a Domino IQ Lab environment for 30 people.

The router is a Wi-Fi 7 router, but it does not have the 6 GHz band. Still, it's pretty cool.

OK, the router costs double the price of the previous model. But it has some impressive features.

It gets a bit warm and I would not run it permanently. It's a travel and lab router.

But for that use case it is pretty awesome. They have bigger models for home/office use.

- OpenWrt 23.05 (Kernel 5.4.213)

- Dual 2.5G Ports freely configurable as WAN or LAN ports

- USB Type-C PD Power Port

- 100 Wifi devices

- VPN services including WireGuard built-in


- Wi-Fi Speed: 688Mbps (2.4GHz), 2882Mbps (5GHz)

- CPU: Qualcomm Quad-core @1.1GHz

- Memory / Storage: 1GB DDR4 / NAND Flash 512MB


The router runs NGINX as its web server, which you can use to add your own web configurations.

It comes with a flexible DHCP and DNS server, so you can simply add your local hosts, for example in a lab environment.


Oh, what I did not mention yet: it comes with a touch display to show statistics and configuration, and lets you turn services on and off.

Even though it is new, they currently have a good discount.


https://store-eu.gl-inet.com/products/slate-7-gl-be3600-dual-band-wi-fi-7-travel-router


Image: GL.iNet Slate 7 Wi-Fi 7 Travel Router - A class of it’s own



 GitHub 

GitHub flows to build releases

Daniel Nashed – 1 May 2025 22:37:03

There are a couple of open source projects I am working on.
Some of them require building Linux binaries. Those binaries would depend on glibc.


For all projects which don't require Domino C-API integration, an Alpine based build can help to create static binaries, which should run on all current Linux distributions.
But those binaries would still need to be built somewhere and published as a release.


The first addition I made is a build container I am currently testing with.
This build option is also available on GitHub Action flows.

GitHub does not support Alpine natively. But in an Ubuntu build environment you can run a Docker based Alpine build environment.

This is pretty cool. You can build C++ applications and also Go applications (which I will need soon for another project).


I have created a test repository to play around with GitHub Actions first: https://github.com/nashcom/buil-test
Here is a sample release.yml.


This is pretty awesome and available with a free GitHub account!


I have started to add it to the first projects.
The Domino Start Script project uses it.

And also the Domino Borg Backup project uses it.
I also added it to the Domino container image to automatically install it when you select the build option.

There are more projects to update and there is more to come..


-- Daniel


     uses: actions/checkout@v4

   - name: Set up Go
     uses: actions/setup-go@v5
     with:
       go-version: '1.21'

   - name: Prepare
     run: |
       mkdir -p dist/linux-bin

   - name: Build Go binary
     run: |
       go build -gccgoflags=-static -o dist/linux-bin/hello_go go/hello.go

   - name: Build C++ binary (Alpine static)
     run: |
       docker run --rm \
         -v "$PWD":/src \
         -w /src/cpp \
         alpine:latest \
         sh -c "apk add --no-cache g++ musl-dev make && make"
       cp cpp/hello dist/linux-bin/hello_cpp

   - name: Create add-on package
     run: |
       TAR_NAME=hello-$(cat version.txt).taz
       cd dist
       tar -cvzf "../$TAR_NAME" *
       cd ..
       sha256sum "$TAR_NAME" | cut -f1 -d" " > checksum.txt

   - name: Upload Release Assets
     uses: softprops/action-gh-release@v1
     with:
       files: |
         *.taz
         checksum.txt
     env:
       GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
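On the consuming side, the checksum.txt produced by the workflow can be used to verify a downloaded archive. A minimal sketch, using a stand-in file in place of a real release download:

```shell
# Sketch: verify an add-on package against checksum.txt.
# The archive content here is a stand-in for a real release download.
TAR_NAME=hello-1.0.taz
echo "demo release content" > "$TAR_NAME"
sha256sum "$TAR_NAME" | cut -f1 -d" " > checksum.txt

# Recompute and compare, as a consumer of the release would do:
CALC=$(sha256sum "$TAR_NAME" | cut -f1 -d" ")
[ "$CALC" = "$(cat checksum.txt)" ] && echo "checksum OK"
```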


